I have this data frame df
         Items Item Code      Prices
1         Beds      1630     $135.60
2   Big Shelve      1229     89.5USD
3 Small Shelve      1229    ¥3680.03
4        Chair       445      92.63€
5         Desk       802 206.43 euro
6         Lamp       832 25307.1 JPY
I want to split the Prices column into three columns: Prices, Currency, and Exchange rate from USD, so that it looks like this:
         Items Item Code   Prices Currency Exchange rates
1         Beds      1630   135.60      USD           1.00
2   Big Shelve      1229    89.50      USD           1.00
3 Small Shelve      1229  3680.03      JPY         115.71
4        Chair       445    92.63      EUR           0.90
5         Desk       802   206.43      EUR           0.90
6         Lamp       832 25307.10      JPY         115.71
I tried using dplyr::separate(), but it would separate at the comma instead.
If I try using gsub(), it gives me this error:
> df2 <- df %>%
+ mutate(price = as.numeric(gsub'[$,€,¥,]','', df$Col3))
Error: unexpected string constant in:
"df2 <- df %>%
mutate(price = as.numeric(gsub'[$,€,¥,]'"
Any ideas what to do? Also, how would I be able to match the currency to the correct items?
This should solve the problem. Using the quantmod package, you can get the current exchange rate and add that into the data:
library(dplyr)
#>
#> Attaching package: 'dplyr'
#> The following objects are masked from 'package:stats':
#>
#> filter, lag
#> The following objects are masked from 'package:base':
#>
#> intersect, setdiff, setequal, union
library(stringr)
library(tidyr)
library(quantmod)
#> Loading required package: xts
#> Loading required package: zoo
#>
#> Attaching package: 'zoo'
#> The following objects are masked from 'package:base':
#>
#> as.Date, as.Date.numeric
#>
#> Attaching package: 'xts'
#> The following objects are masked from 'package:dplyr':
#>
#> first, last
#> Loading required package: TTR
#> Registered S3 method overwritten by 'quantmod':
#> method from
#> as.zoo.data.frame zoo
dat <- tibble::tribble(
  ~Items,         ~"Item Code", ~Prices,
  "Beds",         1630,         "$135.60",
  "Big Shelve",   1229,         "89.5USD",
  "Small Shelve", 1229,         "¥3680.03",
  "Chair",        445,          "92.63€",
  "Desk",         802,          "206.43 euro",
  "Lamp",         832,          "25307.1 JPY")
dat <- dat %>%
  # pull the non-numeric part of Prices out as the currency symbol/word
  mutate(currency = c(trimws(str_extract_all(Prices, "[^\\d\\.]+", simplify = TRUE))),
         # map symbols and spellings onto ISO currency codes
         currency = case_when(currency %in% c("€", "euro") ~ "EUR",
                              currency == "$" ~ "USD",
                              currency == "¥" ~ "JPY",
                              TRUE ~ currency),
         # keep only the numeric part of Prices
         Prices = as.numeric(str_extract_all(Prices, "\\d+\\.\\d+", simplify = TRUE)),
         # build the Yahoo Finance ticker for the USD-to-currency quote
         xr = paste0("USD", currency, "=X")) %>%
  left_join(getQuote(unique(.$xr)) %>% as_tibble(rownames = "xr") %>% select(xr, Last)) %>%
  select(-xr) %>%
  rename("Exchange rates" = "Last")
#> Joining, by = "xr"
dat
#> # A tibble: 6 × 5
#> Items `Item Code` Prices currency `Exchange rates`
#> <chr> <dbl> <dbl> <chr> <dbl>
#> 1 Beds 1630 136. USD 1
#> 2 Big Shelve 1229 89.5 USD 1
#> 3 Small Shelve 1229 3680. JPY 116.
#> 4 Chair 445 92.6 EUR 0.902
#> 5 Desk 802 206. EUR 0.902
#> 6 Lamp 832 25307. JPY 116.
Created on 2022-03-03 by the reprex package (v2.0.1)
The code is as follows.
library(fable)
library(tsibble)
library(dplyr)
tourism_melb <- tourism %>%
  filter(Region == "Melbourne")

tourism_melb %>%
  group_by(Purpose) %>%
  slice(1)

tourism_melb %>%
  autoplot(Trips)

fit <- tourism_melb %>%
  model(
    ets = ETS(Trips ~ trend("A")),
    arima = ARIMA(Trips)
  )

fit %>%
  accuracy() %>%
  arrange(MASE)
Error in accuracy.default(.) :
No accuracy method found for an object of class mdl_dfNo accuracy method found for an object of class tbl_dfNo accuracy method found for an object of class tblNo accuracy method found for an object of class data.frame
What is the reason for the error in the last step?
This might be some configuration issue on your computer. I get output instead of errors when I run your code. I am pasting the output from my console below.
> library(fable)
Loading required package: fabletools
> library(tsibble)
Attaching package: ‘tsibble’
The following objects are masked from ‘package:base’:
intersect, setdiff, union
> library(dplyr)
Attaching package: ‘dplyr’
The following objects are masked from ‘package:stats’:
filter, lag
The following objects are masked from ‘package:base’:
intersect, setdiff, setequal, union
>
> tourism_melb <- tourism %>%
+ filter(Region == "Melbourne")
> tourism_melb %>%
+ group_by(Purpose) %>%
+ slice(1)
# A tsibble: 4 x 5 [1Q]
# Key: Region, State, Purpose [4]
# Groups: Purpose [4]
Quarter Region State Purpose Trips
<qtr> <chr> <chr> <chr> <dbl>
1 1998 Q1 Melbourne Victoria Business 405.
2 1998 Q1 Melbourne Victoria Holiday 428.
3 1998 Q1 Melbourne Victoria Other 79.9
4 1998 Q1 Melbourne Victoria Visiting 666.
>
> tourism_melb %>%
+ autoplot(Trips)
>
> fit <- tourism_melb %>%
+ model(
+ ets = ETS(Trips ~ trend("A")),
+ arima = ARIMA(Trips)
+ )
>
> fit %>%
+ accuracy() %>%
+ arrange(MASE)
# A tibble: 8 × 13
Region State Purpose .model .type ME RMSE MAE
<chr> <chr> <chr> <chr> <chr> <dbl> <dbl> <dbl>
1 Melbourne Victo… Holiday ets Trai… 4.67 50.5 37.2
2 Melbourne Victo… Busine… ets Trai… 3.31 56.4 42.9
3 Melbourne Victo… Busine… arima Trai… 2.54 58.2 46.0
4 Melbourne Victo… Holiday arima Trai… -4.64 54.3 41.4
5 Melbourne Victo… Other arima Trai… -0.344 21.7 17.0
6 Melbourne Victo… Other ets Trai… -0.142 21.7 17.0
7 Melbourne Victo… Visiti… ets Trai… 8.17 60.9 51.4
8 Melbourne Victo… Visiti… arima Trai… 6.89 63.1 51.7
# … with 5 more variables: MPE <dbl>, MAPE <dbl>,
# MASE <dbl>, RMSSE <dbl>, ACF1 <dbl>
# ℹ Use `colnames()` to see all variable names
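One configuration-related possibility (an assumption on my part, not something visible in your snippet) is that another attached package, for example forecast, is masking the accuracy() generic from fabletools. Calling the fabletools method with an explicit namespace is a quick way to rule that out:

# if accuracy() is being masked elsewhere, the explicit namespace call below
# should still dispatch to the fabletools method for a mable
fit %>%
  fabletools::accuracy() %>%
  arrange(MASE)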
I have a factor consisting of words (instances of words that different participants said). I want to collapse it so that there are the categories "that" (every instance of the word "that") and notThat (all other words combined into one category). Naturally there are a lot of other words, and I don't want to go through and type them all. I've tried using != in various places, but it won't work. Maybe I just have the syntax wrong?
Anyway, is there a way to do this? That is, collapse all words that aren't "that" into one group?
How about this:
library(forcats)
x <- c("that", "something", "else")
fct_collapse(x, that = c("that"), other_level="notThat")
#> [1] that notThat notThat
#> Levels: that notThat
Created on 2022-02-15 by the reprex package (v2.0.1)
Edit to show in a data frame
library(dplyr)
#>
#> Attaching package: 'dplyr'
#> The following objects are masked from 'package:stats':
#>
#> filter, lag
#> The following objects are masked from 'package:base':
#>
#> intersect, setdiff, setequal, union
library(forcats)
dat <- data.frame(
  gender = factor(c(1, 0, 1, 1, 1, 0), labels = c("male", "female")),
  age = round(runif(6, 18, 85)),
  word = c("that", "something", "altogether", "different", "entirely", "that"))

dat %>%
  mutate(word_collapse = fct_collapse(word, that = "that", other_level = "notThat"))
#> gender age word word_collapse
#> 1 female 74 that that
#> 2 male 72 something notThat
#> 3 female 57 altogether notThat
#> 4 female 44 different notThat
#> 5 female 79 entirely notThat
#> 6 male 81 that that
Created on 2022-02-15 by the reprex package (v2.0.1)
I'm new to tidymodels, but apparently step_pca() arguments such as num_comp or threshold are not being applied when the recipe is trained. As in the example below, I'm still getting 4 components despite setting num_comp = 2.
library(tidyverse)
library(tidymodels)
#> Registered S3 method overwritten by 'tune':
#> method from
#> required_pkgs.model_spec parsnip
rec <- recipe( ~ ., data = USArrests) %>%
  step_normalize(all_numeric()) %>%
  step_pca(all_numeric(), num_comp = 2)

prep(rec) %>%
  tidy(number = 2, type = "coef") %>%
  pivot_wider(names_from = component, values_from = value, id_cols = terms)
#> # A tibble: 4 x 5
#> terms PC1 PC2 PC3 PC4
#> <chr> <dbl> <dbl> <dbl> <dbl>
#> 1 Murder -0.536 0.418 -0.341 0.649
#> 2 Assault -0.583 0.188 -0.268 -0.743
#> 3 UrbanPop -0.278 -0.873 -0.378 0.134
#> 4 Rape -0.543 -0.167 0.818 0.0890
The full PCA is determined (so you can still compute the variances of each term) and num_comp only specifies how many of the components are retained as predictors. If you want to specify the maximal rank, you can pass that through options:
library(recipes)
#> Loading required package: dplyr
#>
#> Attaching package: 'dplyr'
#> The following objects are masked from 'package:stats':
#>
#> filter, lag
#> The following objects are masked from 'package:base':
#>
#> intersect, setdiff, setequal, union
#>
#> Attaching package: 'recipes'
#> The following object is masked from 'package:stats':
#>
#> step
rec <- recipe( ~ ., data = USArrests) %>%
  step_normalize(all_numeric()) %>%
  step_pca(all_numeric(), num_comp = 2, options = list(rank. = 2))

prep(rec) %>% tidy(number = 2, type = "coef")
#> # A tibble: 8 × 4
#> terms value component id
#> <chr> <dbl> <chr> <chr>
#> 1 Murder -0.536 PC1 pca_AoFOm
#> 2 Assault -0.583 PC1 pca_AoFOm
#> 3 UrbanPop -0.278 PC1 pca_AoFOm
#> 4 Rape -0.543 PC1 pca_AoFOm
#> 5 Murder 0.418 PC2 pca_AoFOm
#> 6 Assault 0.188 PC2 pca_AoFOm
#> 7 UrbanPop -0.873 PC2 pca_AoFOm
#> 8 Rape -0.167 PC2 pca_AoFOm
Created on 2022-01-12 by the reprex package (v2.0.1)
You could also control this via the tol argument from stats::prcomp(), also passed in as an option.
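For illustration only, here is a sketch of that tol route (rec_tol is just a hypothetical name; tol is forwarded to stats::prcomp(), which drops components whose standard deviation is at or below tol times that of the first component):

rec_tol <- recipe( ~ ., data = USArrests) %>%
  step_normalize(all_numeric()) %>%
  # tol is passed through to stats::prcomp(); unlike rank., which fixes the
  # number of components directly, the number kept depends on the eigenvalues
  step_pca(all_numeric(), options = list(tol = 0.5))

prep(rec_tol) %>% tidy(number = 2, type = "coef")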
If you bake the recipe, it seems to work as intended, but I don't know what you aim to achieve afterwards.
library(tidyverse)
library(tidymodels)
USArrests <- USArrests %>%
  rownames_to_column("Countries")

rec <-
  recipe( ~ ., data = USArrests) %>%
  step_normalize(all_numeric()) %>%
  step_pca(all_numeric(), num_comp = 2)

prep(rec) %>%
  bake(new_data = NULL)
#> # A tibble: 50 x 3
#> Countries PC1 PC2
#> <fct> <dbl> <dbl>
#> 1 Alabama -0.976 1.12
#> 2 Alaska -1.93 1.06
#> 3 Arizona -1.75 -0.738
#> 4 Arkansas 0.140 1.11
#> 5 California -2.50 -1.53
#> 6 Colorado -1.50 -0.978
#> 7 Connecticut 1.34 -1.08
#> 8 Delaware -0.0472 -0.322
#> 9 Florida -2.98 0.0388
#> 10 Georgia -1.62 1.27
#> # ... with 40 more rows
Created on 2022-01-11 by the reprex package (v2.0.1)
I have created a function which allows me to carry out time series forecasting using the fable package. The idea of the function was to analyse observed vs predicted values after a particular date/event. Here is a mock data frame which generates a column of dates:-
set.seed(1)
df <- data.frame(Date = sort(sample(seq(as.Date('2018/01/01'), as.Date('2020/09/17'), by="day"),1368883, replace = T)))
And here is the function I created. You specify the data, then the date of the event, then the forecast period in days and lastly a title for your graph.
event_analysis <- function(data, eventdate, period, title){
  require(dplyr)
  require(tsibble)
  require(fable)
  require(fabletools)
  require(imputeTS)
  require(ggplot2)
  data_count <- data %>%
    group_by(Date) %>%
    summarise(Count = n())
  data_count <- as_tsibble(data_count)
  data_count <- na_mean(data_count)
  train <- data_count %>%
    #sample_frac(0.8)
    filter(Date <= as.Date(eventdate))
  fit <- train %>%
    model(
      ets = ETS(Count),
      arima = ARIMA(Count),
      snaive = SNAIVE(Count)
    ) %>%
    mutate(mixed = (ets + arima + snaive) / 3)
  fc <- fit %>% forecast(h = period)
  forecastplot <- fc %>%
    autoplot(data_count, level = NULL) + ggtitle(title) +
    geom_vline(xintercept = as.Date(eventdate), linetype = "dashed", color = "red") +
    labs(caption = "Red dashed line = Event occurrence")
  fc_accuracy <- accuracy(fc, data_count)
  #obs <- data_count
  #colnames(obs)[2] <- "Observed"
  #obs_pred <- merge(data_count, fc_accuracy, by = "Date")
  return(list(forecastplot, fc_accuracy, fc))
}
And in one run, I specify the df, the date of the event, the number of days that I want to forecast (3 weeks), then the title:-
event_analysis(df, "2020-01-01",21,"Event forecast")
Which will print this outcome and plot:-
I concede that the mock data I made isn't totally ideal but the function works well on my real-world data.
Here is what I want to achieve. I would like the function to keep producing this output, but in addition I would like an extra graph which "zooms in" on the forecasted period, for 2 reasons:-
for ease of interpretation
I want to be able to see the N number of days before and N number of days after the event date (N representing the forecast period i.e. 21).
So, an additional graph (along with the original full forecast) that would look like this, perhaps in the one output, "multiplot" style:-
The other thing would be to print another output which shows the observed values in the test set against the predicted values from the models used in the forecasting.
These are basically the two additional things I want to add to my function but I am not sure how to go about this. Any help is massively appreciated :) .
I suppose you could rewrite it this way. I made a couple of adjustments to help you out.
set.seed(1)
df <- data.frame(Date = sort(sample(seq(as.Date('2018/01/01'), as.Date('2020/09/17'), by="day"),1368883, replace = T)))
event_analysis <- function(data, eventdate, period, title){
  # in the future, you might consider moving these outside the function
  library(dplyr)
  library(tsibble)
  library(fable)
  library(fabletools)
  library(imputeTS)
  library(ggplot2)
  # convert at the beginning
  eventdate <- as.Date(eventdate)
  # more compact syntax
  data_count <- count(data, Date, name = "Count")
  # better to specify the index variable explicitly, to avoid the message
  data_count <- as_tsibble(data_count, index = Date)
  # you need to complete missing dates, just in case
  data_count <- tsibble::fill_gaps(data_count)
  data_count <- na_mean(data_count)
  train <- data_count %>%
    filter(Date <= eventdate)
  test <- data_count %>%
    filter(Date > eventdate, Date <= (eventdate + period))
  fit <- train %>%
    model(
      ets = ETS(Count),
      arima = ARIMA(Count),
      snaive = SNAIVE(Count)
    ) %>%
    mutate(mixed = (ets + arima + snaive) / 3)
  fc <- fit %>% forecast(h = period)
  # your plot
  forecastplot <- fc %>%
    autoplot(data_count, level = NULL) +
    ggtitle(title) +
    geom_vline(xintercept = as.Date(eventdate), linetype = "dashed", color = "red") +
    labs(caption = "Red dashed line = Event occurrence")
  # plot just the forecast and the test window
  zoomfcstplot <- autoplot(fc) + autolayer(test, .vars = Count)
  fc_accuracy <- accuracy(fc, data_count)
  ### EDIT: ###
  # forecasts vs test observations
  res <- fc %>%
    as_tibble() %>%
    select(-Count) %>%
    tidyr::pivot_wider(names_from = .model, values_from = .mean) %>%
    inner_join(test, by = "Date")
  ##############
  return(list(forecastplot = forecastplot,
              zoomplot = zoomfcstplot,
              accuracy = fc_accuracy,
              forecast = fc,
              results = res))
}
event_analysis(df,
eventdate = "2020-01-01",
period = 21,
title = "Event forecast")
Output:
#>
#> Attaching package: 'dplyr'
#> The following objects are masked from 'package:stats':
#>
#> filter, lag
#> The following objects are masked from 'package:base':
#>
#> intersect, setdiff, setequal, union
#> Loading required package: fabletools
#> Registered S3 method overwritten by 'quantmod':
#> method from
#> as.zoo.data.frame zoo
#> $forecastplot
#>
#> $zoomplot
#>
#> $accuracy
#> # A tibble: 4 x 9
#> .model .type ME RMSE MAE MPE MAPE MASE ACF1
#> <chr> <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
#> 1 arima Test -16.8 41.8 35.2 -1.31 2.61 0.791 0.164
#> 2 ets Test -16.8 41.8 35.2 -1.31 2.61 0.791 0.164
#> 3 mixed Test -21.9 44.7 36.8 -1.68 2.73 0.825 -0.0682
#> 4 snaive Test -32.1 57.3 46.6 -2.43 3.45 1.05 -0.377
#>
#> $forecast
#> # A fable: 84 x 4 [1D]
#> # Key: .model [4]
#> .model Date Count .mean
#> <chr> <date> <dist> <dbl>
#> 1 ets 2020-01-02 N(1383, 1505) 1383.
#> 2 ets 2020-01-03 N(1383, 1505) 1383.
#> 3 ets 2020-01-04 N(1383, 1505) 1383.
#> 4 ets 2020-01-05 N(1383, 1505) 1383.
#> 5 ets 2020-01-06 N(1383, 1505) 1383.
#> 6 ets 2020-01-07 N(1383, 1505) 1383.
#> 7 ets 2020-01-08 N(1383, 1505) 1383.
#> 8 ets 2020-01-09 N(1383, 1505) 1383.
#> 9 ets 2020-01-10 N(1383, 1505) 1383.
#> 10 ets 2020-01-11 N(1383, 1505) 1383.
#> # ... with 74 more rows
#>
#> $results
#> # A tibble: 21 x 6
#> Date ets arima snaive mixed Count
#> <date> <dbl> <dbl> <dbl> <dbl> <int>
#> 1 2020-01-02 1383. 1383. 1386 1384. 1350
#> 2 2020-01-03 1383. 1383. 1366 1377. 1398
#> 3 2020-01-04 1383. 1383. 1426 1397. 1357
#> 4 2020-01-05 1383. 1383. 1398 1388. 1415
#> 5 2020-01-06 1383. 1383. 1431 1399. 1399
#> 6 2020-01-07 1383. 1383. 1431 1399. 1346
#> 7 2020-01-08 1383. 1383. 1350 1372. 1299
#> 8 2020-01-09 1383. 1383. 1386 1384. 1303
#> 9 2020-01-10 1383. 1383. 1366 1377. 1365
#> 10 2020-01-11 1383. 1383. 1426 1397. 1328
#> # ... with 11 more rows
I have a sample data frame as below:
quoteiD <- c("q1","q2","q3","q4", "q5")
quote <- c("Unthinking respect for authority is the greatest enemy of truth.",
"In the middle of difficulty lies opportunity.",
"Intelligence is the ability to adapt to change.",
"Science is not only a disciple of reason but, also, one of romance and passion.",
"If I have seen further it is by standing on the shoulders of Giants.")
library(dplyr)
quotes <- tibble(quoteiD = quoteiD, quote= quote)
quotes
I have created some tidy text as below
library(tidytext)
data(stop_words)
tidy_words <- quotes %>%
  unnest_tokens(word, quote) %>%
  anti_join(stop_words) %>%
  count(word, sort = TRUE)
tidy_words
Further, I have searched for synonyms using the qdap package as below:
library(qdap)
syns <- synonyms(tidy_words$word)
The qdap output is a list, and I am looking to pick the first 5 synonyms for each word in the tidy data frame and create a column called synonyms as below:
word       n   synonyms
ability    1   adeptness, aptitude, capability, capacity, competence
adapt      1   acclimatize, accommodate, adjust, alter, apply
authority  1   ascendancy, charge, command, control, direction
What is an elegant way of merging the first 5 synonyms from the qdap synonyms function into a single column, separated by commas?
One way this can be done using a tidyverse solution is:
library(plyr)
library(dplyr)
#>
#> Attaching package: 'dplyr'
#> The following objects are masked from 'package:plyr':
#>
#> arrange, count, desc, failwith, id, mutate, rename, summarise,
#> summarize
#> The following objects are masked from 'package:stats':
#>
#> filter, lag
#> The following objects are masked from 'package:base':
#>
#> intersect, setdiff, setequal, union
library(tidytext)
library(qdap)
#> Loading required package: qdapDictionaries
#> Loading required package: qdapRegex
#>
#> Attaching package: 'qdapRegex'
#> The following object is masked from 'package:dplyr':
#>
#> explain
#> Loading required package: qdapTools
#>
#> Attaching package: 'qdapTools'
#> The following object is masked from 'package:dplyr':
#>
#> id
#> The following object is masked from 'package:plyr':
#>
#> id
#> Loading required package: RColorBrewer
#>
#> Attaching package: 'qdap'
#> The following object is masked from 'package:dplyr':
#>
#> %>%
#> The following object is masked from 'package:base':
#>
#> Filter
library(tibble)
library(tidyr)
#>
#> Attaching package: 'tidyr'
#> The following object is masked from 'package:qdap':
#>
#> %>%
quotes <- tibble(quoteiD = paste0("q", 1:5),
                 quote = c(".\n\nthe ebodac consortium consists of partners: janssen (efpia), london school of hygiene and tropical medicine (lshtm),",
                           "world vision) mobile health software development and deployment in resource limited settings grameen\n\nas such, the ebodac consortium is well placed to tackle.",
                           "Intelligence is the ability to adapt to change.",
                           "Science is a of reason of romance and passion.",
                           "If I have seen further it is by standing on ."))
quotes
#> # A tibble: 5 x 2
#> quoteiD quote
#> <chr> <chr>
#> 1 q1 ".\n\nthe ebodac consortium consists of partners: janssen (efpia~
#> 2 q2 "world vision) mobile health software development and deployment~
#> 3 q3 Intelligence is the ability to adapt to change.
#> 4 q4 Science is a of reason of romance and passion.
#> 5 q5 If I have seen further it is by standing on .
data(stop_words)
tidy_words <- quotes %>%
  unnest_tokens(word, quote) %>%
  anti_join(stop_words) %>%
  count(word, sort = TRUE)
#> Joining, by = "word"
tidy_words
#> # A tibble: 33 x 2
#> word n
#> <chr> <int>
#> 1 consortium 2
#> 2 ebodac 2
#> 3 ability 1
#> 4 adapt 1
#> 5 change 1
#> 6 consists 1
#> 7 deployment 1
#> 8 development 1
#> 9 efpia 1
#> 10 grameen 1
#> # ... with 23 more rows
syns <- synonyms(tidy_words$word)
#> no match for the following:
#> consortium, ebodac, consists, deployment, efpia, grameen, janssen, london, lshtm, partners, settings, software, tropical
#> ========================
syns %>%
  plyr::ldply(data.frame) %>% # Change the list to a dataframe (See https://stackoverflow.com/questions/4227223/r-list-to-data-frame)
  rename("Word_DefNumber" = 1, "Syn" = 2) %>% # Rename the columns with names that are more intuitive
  separate(Word_DefNumber, c("Word", "DefNumber"), sep = "\\.") %>% # Split into the word and its definition number
  group_by(Word) %>% # Group by word, so that row selection is done per word
  slice(1:5) %>% # Keep the first 5 rows for each word
  summarise(synonyms = paste(Syn, collapse = ", ")) %>% # Combine the synonyms, comma separated, using paste
  ungroup() # So there are no unintended effects of grouped data later on
#> # A tibble: 20 x 2
#> Word synonyms
#> <chr> <chr>
#> 1 ability adeptness, aptitude, capability, capacity, competence
#> 2 adapt acclimatize, accommodate, adjust, alter, apply
#> 3 change alter, convert, diversify, fluctuate, metamorphose
#> 4 development advance, advancement, evolution, expansion, growth
#> 5 health fitness, good condition, haleness, healthiness, robustness
#> 6 hygiene cleanliness, hygienics, sanitary measures, sanitation
#> 7 intelligence acumen, alertness, aptitude, brain power, brains
#> 8 limited bounded, checked, circumscribed, confined, constrained
#> 9 medicine cure, drug, medicament, medication, nostrum
#> 10 mobile ambulatory, itinerant, locomotive, migrant, motile
#> 11 passion animation, ardour, eagerness, emotion, excitement
#> 12 reason apprehension, brains, comprehension, intellect, judgment
#> 13 resource ability, capability, cleverness, ingenuity, initiative
#> 14 romance affair, affaire (du coeur), affair of the heart, amour, at~
#> 15 school academy, alma mater, college, department, discipline
#> 16 science body of knowledge, branch of knowledge, discipline, art, s~
#> 17 standing condition, credit, eminence, estimation, footing
#> 18 tackle accoutrements, apparatus, equipment, gear, implements
#> 19 vision eyes, eyesight, perception, seeing, sight
#> 20 world earth, earthly sphere, globe, everybody, everyone
Created on 2019-04-05 by the reprex package (v0.2.1)
Please note that plyr should be loaded before dplyr, so that plyr does not mask the dplyr verbs used in the pipeline.
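To illustrate why the order matters, here is a sketch of an assumed session with the wrong attach order (not part of the reprex above):

library(dplyr)
library(plyr)   # wrong order: plyr now masks dplyr::rename(), mutate(), summarise(), count()
# In such a session, rename("Word_DefNumber" = 1, "Syn" = 2) would dispatch to
# plyr::rename(), whose interface differs, and the pipeline above would fail.
# Attaching plyr first, or not attaching it at all and relying on the explicit
# plyr::ldply() call, avoids the clash.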